
Perception of co-speech gestures in aphasic patients: A visual exploration study during the observation of dyadic conversations



Abstract

Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies.

Methods: Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects watched videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present or absent), gaze direction (towards the speaker or the listener), and region of interest (ROI: hands, face, and body).

Results: Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Furthermore, a significant gaze direction × ROI × group interaction revealed that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls.

Conclusion: Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. We discuss whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
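The two dependent measures in the Methods, cumulative and mean fixation duration per condition, can be illustrated with a minimal sketch. The record layout, field names, and numbers below are hypothetical; the study's actual eye-tracking pipeline is not described in the abstract.

```python
from collections import defaultdict

# Hypothetical fixation records: (roi, gesture_present, duration_ms).
# ROIs follow the abstract's categories (hands, face, body).
fixations = [
    ("face", True, 420), ("face", True, 380),
    ("hands", True, 250),
    ("face", False, 300), ("body", False, 150),
]

def fixation_stats(records):
    """Cumulative and mean fixation duration per (ROI, gesture) condition."""
    by_condition = defaultdict(list)
    for roi, gesture_present, duration in records:
        by_condition[(roi, gesture_present)].append(duration)
    return {
        cond: {"cumulative": sum(ds), "mean": sum(ds) / len(ds)}
        for cond, ds in by_condition.items()
    }

stats = fixation_stats(fixations)
print(stats[("face", True)])  # {'cumulative': 800, 'mean': 400.0}
```

In the study these per-condition measures would then feed the reported interaction analyses (e.g. co-speech gesture × ROI); that inferential step is beyond this sketch.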


